We describe PromptBoosting, a query-efficient procedure for building a text classifier from a neural language model (LM) without access to the LM's parameters, gradients, or hidden representations. This form of "black-box" classifier training has become increasingly important as the cost of training and inference in large-scale LMs grows. But existing black-box LM classifier learning approaches are themselves computationally inefficient, typically specializing LMs to the target task by searching in a large space of (discrete or continuous) prompts using zeroth-order optimization methods. Instead of directly optimizing in prompt space, PromptBoosting obtains a small pool of prompts via a gradient-free approach and then constructs a large pool of weak learners by pairing these prompts with different elements of the LM's output distribution. These weak learners are then ensembled using the AdaBoost algorithm. The entire learning process requires only a small number of forward passes and no backward pass. Experiments show that PromptBoosting achieves state-of-the-art performance in multiple black-box few-shot classification tasks, and matches or outperforms full fine-tuning in both few-shot and standard learning paradigms, while training 10x faster than existing black-box methods.
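To make the ensembling step concrete, here is a minimal sketch of pairing prompt templates with verbalizer tokens and boosting the resulting weak learners with multiclass AdaBoost (SAMME). It is an illustration, not the authors' implementation: `lm_token_probs` is a hypothetical black-box LM query, and the paper's weak-learner construction and weighting details differ.

```python
import numpy as np

# Hypothetical black-box LM interface: returns a {token: probability} map for the
# next token given a prompted input. Forward queries are all the sketch needs.
def lm_token_probs(prompt_template: str, text: str) -> dict:
    raise NotImplementedError("query your hosted LM here")

class PromptWeakLearner:
    """One (prompt template, verbalizer) pair: predict the class whose verbalizer
    token receives the highest LM probability."""
    def __init__(self, template, verbalizer):          # verbalizer: {class: token}
        self.template, self.verbalizer = template, verbalizer

    def predict(self, text):
        probs = lm_token_probs(self.template, text)
        return max(self.verbalizer, key=lambda c: probs.get(self.verbalizer[c], 0.0))

def boost(learners, X, y, rounds=10):
    """Multiclass AdaBoost (SAMME) over the pool of prompt-based weak learners."""
    y = np.asarray(y)
    n, K = len(X), len(set(y.tolist()))
    preds = [np.array([h.predict(x) for x in X]) for h in learners]   # cache predictions
    w = np.ones(n) / n
    ensemble = []
    for _ in range(rounds):
        errs = np.array([np.sum(w * (p != y)) for p in preds])
        best = int(np.argmin(errs))
        err = float(np.clip(errs[best], 1e-10, 1 - 1e-10))
        alpha = np.log((1 - err) / err) + np.log(K - 1)
        w *= np.exp(alpha * (preds[best] != y))         # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, learners[best]))
    return ensemble
```

Because the weak learners' predictions can be cached after a single round of forward passes, boosting itself adds no further LM queries.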

Robustness evaluation against adversarial examples has become increasingly important to unveil the trustworthiness of the prevailing deep models in natural language processing (NLP). However, in contrast to the computer vision domain, where first-order projected gradient descent (PGD) serves as the benchmark approach to generate adversarial examples for robustness evaluation, a principled first-order gradient-based robustness evaluation framework is still lacking in NLP. The emerging optimization challenges lie in 1) the discrete nature of textual inputs together with the strong coupling between the perturbation location and the actual content, and 2) the additional constraint that the perturbed text should be fluent and achieve a low perplexity under a language model. These challenges make the development of PGD-like NLP attacks difficult. To bridge the gap, we propose TextGrad, a new attack generator using gradient-driven optimization, supporting high-accuracy and high-quality assessment of adversarial robustness in NLP. Specifically, we address the aforementioned challenges in a unified optimization framework: we develop an effective convex relaxation method to co-optimize the continuously-relaxed site selection and perturbation variables, and leverage an effective sampling method to establish an accurate mapping from the continuous optimization variables to the discrete textual perturbations. Moreover, as a first-order attack generation method, TextGrad can be baked into adversarial training to further improve the robustness of NLP models. Extensive experiments are provided to demonstrate the effectiveness of TextGrad not only in attack generation for robustness evaluation but also in adversarial defense.
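As a rough sketch of what co-optimizing relaxed site-selection and substitution variables can look like, the snippet below shows one gradient-ascent step on the relaxed variables and the sampling map back to discrete token substitutions. This is not TextGrad's implementation: the budget projection on the site variables, the fluency (perplexity) constraint, and the expectation over sampled perturbations inside the attack loss are all omitted, and `z`, `u`, and `loss` are placeholders.

```python
import torch
import torch.nn.functional as F

def relaxed_ascent_step(z, u, loss, lr_z=0.1, lr_u=0.1):
    """One first-order ascent step on continuously relaxed attack variables.
    z: (L,) site-selection scores in [0, 1]; u: (L, V) per-site substitution
    distributions (rows on the probability simplex)."""
    g_z, g_u = torch.autograd.grad(loss, [z, u])
    with torch.no_grad():
        z_new = (z + lr_z * g_z).clamp(0.0, 1.0)                     # box projection
        u_new = F.softmax(torch.log(u + 1e-12) + lr_u * g_u, dim=-1)  # exponentiated-gradient step on the simplex
    return z_new.requires_grad_(), u_new.requires_grad_()

def sample_discrete_attack(z, u, budget=3):
    """Map the relaxed variables to a discrete perturbation: take the top-`budget`
    sites by z and sample one substitute token per chosen site from u."""
    sites = torch.topk(z, k=budget).indices
    tokens = torch.multinomial(u[sites], num_samples=1).squeeze(-1)
    return sites, tokens
```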
Generative models have been widely studied in computer vision. Recently, diffusion models have drawn substantial attention due to the high quality of their generated images. A key desired property of image generative models is the ability to disentangle different attributes, which should enable modification towards a style without changing the semantic content, and the modification parameters should generalize to different images. Previous studies have found that generative adversarial networks (GANs) are inherently endowed with such disentanglement capability, so they can perform disentangled image editing without re-training or fine-tuning the network. In this work, we explore whether diffusion models are also inherently equipped with such a capability. Our finding is that for stable diffusion models, by partially changing the input text embedding from a neutral description (e.g., "a photo of person") to one with style (e.g., "a photo of person with smile") while fixing all the Gaussian random noises introduced during the denoising process, the generated images can be modified towards the target style without changing the semantic content. Based on this finding, we further propose a simple, lightweight image editing algorithm where the mixing weights of the two text embeddings are optimized for style matching and content preservation. This entire process only involves optimizing over around 50 parameters and does not fine-tune the diffusion model itself. Experiments show that the proposed method can modify a wide range of attributes, outperforming diffusion-model-based image-editing algorithms that require fine-tuning. The optimized weights generalize well to different images. Our code is publicly available at https://github.com/UCSB-NLP-Chang/DiffusionDisentanglement.
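The sketch below conveys the shape of the editing procedure: one learnable mixing weight per denoising step (roughly 50 parameters), fixed starting latent and fixed per-step noises, and a loss trading off style matching against content preservation. All of `denoise_step`, `decode`, and `style_loss` are assumed placeholder interfaces rather than any specific library's API, and the paper's actual loss terms are only approximated here.

```python
import torch

def optimize_mixing_weights(denoise_step, decode, c_neutral, c_style, x_T, noises,
                            img_ref, style_loss, steps=50, iters=30, lr=0.05):
    """Optimize one mixing weight per denoising step that blends the neutral and
    style text embeddings. The starting latent `x_T` and the per-step Gaussian
    noises `noises` are fixed, so only the conditioning changes; the diffusion
    model itself is never fine-tuned."""
    lam = torch.zeros(steps, requires_grad=True)       # ~50 trainable parameters
    opt = torch.optim.Adam([lam], lr=lr)
    for _ in range(iters):
        x = x_T
        for t in range(steps):
            w = torch.sigmoid(lam[t])
            c = (1 - w) * c_neutral + w * c_style      # per-step embedding mixture
            x = denoise_step(x, t, c, noises[t])       # same noise at every iteration
        img = decode(x)
        loss = style_loss(img) + ((img - img_ref) ** 2).mean()   # style + content terms
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(lam).detach()
```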
We integrate contrastive learning (CL) with adversarial learning to co-optimize the robustness and accuracy of code models. Different from existing works, we show that code obfuscation, a standard code transformation operation, provides novel means to generate complementary `views' of a code that enable us to achieve both robust and accurate code models. To the best of our knowledge, this is the first systematic study to explore and exploit the robustness and accuracy benefits of (multi-view) code obfuscations in code models. Specifically, we first adopt adversarial codes as robustness-promoting views in CL at the self-supervised pre-training phase. This yields improved robustness and transferability for downstream tasks. Next, at the supervised fine-tuning stage, we show that adversarial training with a proper temporally-staggered schedule of adversarial code generation can further improve robustness and accuracy of the pre-trained code model. Built on the above two modules, we develop CLAWSAT, a novel self-supervised learning (SSL) framework for code by integrating $\underline{\textrm{CL}}$ with $\underline{\textrm{a}}$dversarial vie$\underline{\textrm{w}}$s (CLAW) with $\underline{\textrm{s}}$taggered $\underline{\textrm{a}}$dversarial $\underline{\textrm{t}}$raining (SAT). On evaluating three downstream tasks across Python and Java, we show that CLAWSAT consistently yields the best robustness and accuracy ($\textit{e.g.}$ 11$\%$ in robustness and 6$\%$ in accuracy on the code summarization task in Python). We additionally demonstrate the effectiveness of adversarial learning in CLAW by analyzing the characteristics of the loss landscape and interpretability of the pre-trained models.
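A minimal sketch of the CLAW-style pre-training step follows: a benign obfuscated view of a code snippet is contrasted against an adversarially generated obfuscated view using a standard NT-Xent loss. The `encoder`, `benign_obf`, and `adv_obf` callables are assumptions for illustration; the paper generates adversarial views with a gradient-guided attack on code transformations, which is abstracted away here.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Symmetric NT-Xent contrastive loss between two batches of paired views."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

def claw_pretrain_step(encoder, codes, benign_obf, adv_obf, optimizer):
    """One contrastive pre-training step pairing a benign obfuscated view with an
    adversarial obfuscated view of the same code snippet."""
    z_benign = encoder([benign_obf(c) for c in codes])   # encoder: list[str] -> (B, D) embeddings
    z_adv = encoder([adv_obf(c) for c in codes])
    loss = nt_xent(z_benign, z_adv)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

The staggered adversarial training (SAT) stage then alternates, on a schedule, between standard fine-tuning steps and steps on freshly generated adversarial code.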
Despite the recent surge of advances in promoting machine learning (ML) fairness, existing mainstream approaches mostly require training or finetuning the entire weights of a neural network to meet the fairness criteria. However, this is often infeasible in practice for large-scale trained models due to the heavy computational and storage costs, low data efficiency, and model privacy concerns. In this paper, we propose a new generic fairness learning paradigm, called FairReprogram, which incorporates the model reprogramming technique. Specifically, FairReprogram keeps the neural model fixed and instead appends to the input a set of perturbations, called the fairness trigger, which is tuned towards the fairness criteria under a min-max formulation. We further introduce an information-theoretic framework that explains why and under what conditions fairness goals can be achieved using the fairness trigger. We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output prediction of fixed ML models by providing false demographic information, which hinders the model from exploiting the correct demographic information to make its predictions. Extensive experiments on both NLP and CV datasets demonstrate that our method achieves better fairness improvements than retraining-based methods, with far less training cost and data dependency, under two widely-used fairness criteria.
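A simplified sketch of the min-max idea: a frozen classifier sees the input with a learnable trigger appended, a discriminator tries to recover the sensitive attribute from the classifier's outputs, and the trigger is tuned to keep task accuracy while fooling the discriminator. This is only one plausible instantiation under assumed shapes (embedded inputs, a logit-level discriminator), not the released FairReprogram implementation.

```python
import torch
import torch.nn as nn

def fair_reprogram_step(model, discriminator, trigger, x_embed, y, a,
                        opt_trigger, opt_disc, lam=1.0):
    """One min-max update for a fairness trigger appended to embedded inputs.
    `model` is frozen; `trigger` has shape (L_trig, D); `a` holds sensitive attributes."""
    ce = nn.CrossEntropyLoss()
    trig = trigger.unsqueeze(0).expand(x_embed.size(0), -1, -1)
    logits = model(torch.cat([x_embed, trig], dim=1))         # frozen model, trigger is trainable

    # max step: discriminator learns to predict demographics from the model outputs
    d_loss = ce(discriminator(logits.detach()), a)
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # min step: trigger preserves task accuracy while obscuring demographic information
    t_loss = ce(logits, y) - lam * ce(discriminator(logits), a)
    opt_trigger.zero_grad(); t_loss.backward(); opt_trigger.step()
    return t_loss.item(), d_loss.item()
```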
Class-incremental learning (CIL) suffers from the notorious dilemma between learning newly added classes and preserving previously learned class knowledge. Catastrophic forgetting can be mitigated by storing historical data for replay, but this incurs memory overhead as well as prediction updates. To address this dilemma, we propose to leverage "free" external unlabeled data querying in continual learning. We first present a CIL with Queried Unlabeled Data (CIL-QUD) scheme, in which we only store a handful of past training samples as anchors and use them to query relevant unlabeled examples each time. Along with the new and past stored data, the queried unlabeled data are effectively utilized through learning-without-forgetting (LwF) regularizers and class-balanced training. Besides preserving model generalization over past and current tasks, we next study the adversarial robustness of CIL-QUD. Inspired by the success of learning robust models with unlabeled data, we explore a new robustness-aware CIL setting, in which the learned adversarial robustness has to resist forgetting and be transferred as new tasks continually arrive. While existing options easily fail, we show that queried unlabeled data can continue to benefit, and we seamlessly extend CIL-QUD into its robustified version, RCIL-QUD. Extensive experiments demonstrate that CIL-QUD achieves substantial accuracy gains on CIFAR-10 and CIFAR-100 compared to previous state-of-the-art CIL approaches. Moreover, RCIL-QUD establishes the first strong milestone for robustness-aware CIL. Code is available at https://github.com/vita-group/cil-qud.
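The sketch below shows the shape of the training objective described above: cross-entropy on new-task data and stored anchors, plus an LwF-style distillation term on the queried unlabeled data that keeps the updated model close to the previous-task model on old classes. The assumption that old classes occupy the first output slots, and the specific weighting, are illustrative choices rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def cil_qud_loss(model, old_model, x_new, y_new, x_anchor, y_anchor, x_query,
                 T=2.0, lam=1.0):
    """Supervised loss on new data + anchors, plus LwF distillation on queried
    unlabeled data against the frozen previous-task model `old_model`."""
    ce = F.cross_entropy(model(x_new), y_new) + F.cross_entropy(model(x_anchor), y_anchor)

    with torch.no_grad():                                # frozen copy from the last task
        old_logits = old_model(x_query)
    n_old = old_logits.size(1)                           # assume old classes come first in the head
    p_old = F.softmax(old_logits / T, dim=1)
    log_p_new = F.log_softmax(model(x_query)[:, :n_old] / T, dim=1)
    lwf = F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)

    return ce + lam * lwf
```

For RCIL-QUD, the same structure applies with the clean inputs replaced by adversarially perturbed ones during training.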
Certifiable robustness is a highly desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios, but it often demands tedious computation to establish. The main hurdle lies in the massive amount of non-linearity in large DNNs. To trade off DNN expressiveness (which calls for more non-linearity) and robustness certification scalability (which prefers more linearity), we propose a novel solution that strategically manipulates neurons by "grafting" appropriate levels of linearity. The core of our proposal is to first linearize insignificant ReLU neurons, eliminating non-linear components that are both redundant for DNN performance and harmful to its certification. We then optimize the slopes and intercepts of the replacement linear activations to restore model performance while maintaining certifiability. Hence, typical neuron pruning can be viewed as a special case of grafting a linear function with fixed zero slope and intercept, which may overly restrict the network's flexibility and sacrifice its performance. Extensive experiments on multiple datasets and network backbones show that our linearity grafting can (1) effectively tighten certified bounds; (2) achieve competitive certifiable robustness without certified robust training (i.e., 30% improvement on CIFAR-10 models); and (3) scale up complete verification to large adversarially trained models with 17M parameters. Code is available at https://github.com/vita-group/linearity-grafting.
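To make the grafting operation concrete, here is a minimal activation module in which a chosen subset of neurons uses a learnable linear map a*x + b instead of ReLU. The neuron-selection criterion (which neurons count as "insignificant") is outside this sketch and is supplied as a mask.

```python
import torch
import torch.nn as nn

class GraftedReLU(nn.Module):
    """ReLU layer where masked neurons are replaced by a learnable linear map
    slope * x + intercept. Fixing slope = 0 and intercept = 0 on a grafted neuron
    recovers plain neuron pruning, the special case mentioned above."""
    def __init__(self, num_neurons, graft_mask):
        super().__init__()
        self.register_buffer("mask", graft_mask.float())   # 1 = grafted (linear), 0 = keep ReLU
        self.slope = nn.Parameter(torch.ones(num_neurons))
        self.intercept = nn.Parameter(torch.zeros(num_neurons))

    def forward(self, x):
        linear = self.slope * x + self.intercept
        return self.mask * linear + (1.0 - self.mask) * torch.relu(x)
```

Since the grafted neurons are exactly linear, a bound-propagation verifier can pass through them without any relaxation, which is where the tighter certified bounds come from.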
Pre-training serves as a broadly adopted starting point for transfer learning on various downstream tasks. Recent investigations of the lottery ticket hypothesis (LTH) demonstrate that such enormous pre-trained models can be replaced by extremely sparse subnetworks (a.k.a. matching subnetworks) without sacrificing transferability. However, practical security-critical applications usually pose more challenging requirements beyond standard transfer, which also demand that these subnetworks overcome adversarial vulnerability. In this paper, we formulate a more rigorous concept, Double-Win Lottery Tickets, in which a subnetwork located in a pre-trained model can be independently transferred to diverse downstream tasks and reach the same standard and robust generalization as the full pre-trained model, under both standard and adversarial training regimes. We comprehensively examine various pre-training mechanisms and find that robust pre-training tends to craft sparser double-win lottery tickets with performance superior to their standard counterparts. For example, on downstream CIFAR-10/100 datasets, we identify double-win matching subnetworks from standard, fast adversarial, and adversarial pre-training at 89.26%/73.79%, 89.26%/79.03%, and 91.41%/83.22% sparsity, respectively. Furthermore, we observe that the obtained double-win lottery tickets transfer more data-efficiently under practical data-limited (e.g., 1% and 10%) downstream schemes. Our results show that the benefits of robust pre-training can be amplified by the lottery ticket scheme, as well as by the data-limited transfer setting. Code is available at https://github.com/vita-group/double-win-lth.
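Locating matching subnetworks of this kind typically relies on iterative magnitude pruning of the pre-trained weights. The snippet below sketches only the mask-construction step (global magnitude pruning); the surrounding prune-rewind-retrain loop, and whether the retraining is standard or adversarial, are the knobs the paper studies and are not shown here. The crude "weight"-name filter is an illustrative simplification.

```python
import torch

def magnitude_mask(model, sparsity):
    """Return per-parameter binary masks keeping the largest-magnitude weights
    globally, pruning a `sparsity` fraction of all weight entries."""
    scores = torch.cat([p.detach().abs().flatten()
                        for n, p in model.named_parameters() if "weight" in n])
    k = int(sparsity * scores.numel())
    threshold = torch.kthvalue(scores, k).values if k > 0 else scores.min() - 1
    return {n: (p.detach().abs() > threshold).float()
            for n, p in model.named_parameters() if "weight" in n}
```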
Self-supervised learning (SSL) in speech involves training a speech representation network on a large-scale unannotated speech corpus, and then applying the learned representations to downstream tasks. Since the majority of downstream tasks for speech SSL focus mainly on the content information in speech, the most desirable speech representations should be able to disentangle unwanted variations, such as speaker variations, from the content. However, disentangling speakers is very challenging, because removing the speaker information can easily result in a loss of content as well, and the damage of the latter usually far outweighs the benefit of the former. In this paper, we propose a new SSL method that can achieve speaker disentanglement without severe loss of content. Our approach is adapted from the HuBERT framework and incorporates disentangling mechanisms to regularize both the teacher labels and the learned representations. We evaluate the benefit of speaker disentanglement on a set of content-related downstream tasks and observe a consistent and notable performance advantage of our speaker-disentangled representations.
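The sketch below only conveys the shape of such an objective under assumed interfaces: a HuBERT-style masked-prediction loss against (speaker-normalized) teacher labels, plus a regularizer that pulls together the student's representations of an utterance and a speaker-perturbed copy (e.g., pitch/formant shifted). The paper's actual disentanglement mechanisms differ; `student`, the augmentation, and the label source are all placeholders.

```python
import torch
import torch.nn.functional as F

def disentangled_ssl_loss(student, x, x_spk_aug, teacher_labels, mask_idx, alpha=1.0):
    """Masked prediction on the original utterance plus a speaker-invariance
    regularizer between the original and a speaker-perturbed copy.
    Assumes `student` returns (frame logits, frame features) and that the
    augmentation preserves the number of frames."""
    logits, feats = student(x)                  # (T, C) prediction logits, (T, D) features
    _, feats_aug = student(x_spk_aug)

    pred = F.cross_entropy(logits[mask_idx], teacher_labels[mask_idx])
    invariance = 1.0 - F.cosine_similarity(feats, feats_aug, dim=-1).mean()
    return pred + alpha * invariance
```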
Lifelong event detection aims to incrementally update a model with new event types and data while retaining its capability on previously learned old types. A key challenge is that the model catastrophically forgets old types when continually trained on new data. In this paper, we introduce Episodic Memory Prompts (EMP) to explicitly preserve task-specific knowledge. Our method employs continuous prompts for each task, which are optimized to guide the model's predictions and learn event-specific representations. The EMPs learned in previous tasks are carried along with the model in subsequent tasks and serve as a memory module that retains old knowledge and transfers it to new tasks. Experimental results demonstrate the effectiveness of our method. Furthermore, we conduct a comprehensive analysis of new and old event types in lifelong learning.
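A minimal sketch of the prompt-memory mechanism described above: one block of soft prompt vectors per task, with earlier tasks' prompts frozen and carried along, and the concatenated prompts prepended to the token embeddings. The prompt length, initialization, and how the prompts interact with the backbone are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EpisodicMemoryPrompts(nn.Module):
    """Per-task continuous (soft) prompts; old tasks' prompts are frozen, the
    current task's prompts are trainable. Call new_task() before each task."""
    def __init__(self, dim, prompt_len=5):
        super().__init__()
        self.prompts = nn.ParameterList()
        self.dim, self.prompt_len = dim, prompt_len

    def new_task(self):
        for p in self.prompts:                  # freeze prompts from earlier tasks
            p.requires_grad_(False)
        self.prompts.append(nn.Parameter(torch.randn(self.prompt_len, self.dim) * 0.02))

    def forward(self, token_embeds):            # token_embeds: (B, T, D)
        mem = torch.cat(list(self.prompts), dim=0)                   # (n_tasks * L, D)
        mem = mem.unsqueeze(0).expand(token_embeds.size(0), -1, -1)
        return torch.cat([mem, token_embeds], dim=1)                 # prepend prompt memory
```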